Hamilton-Jacobi-Bellman equations for the optimal control of a state equation with memory

Authors

  • G. Carlier
  • R. Tahraoui
Abstract

This article is devoted to the optimal control of state equations with memory of the form:
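A typical instance of such dynamics, sketched here with generic symbols rather than the paper's actual equation, involves a distributed delay:

    \dot{x}(t) = F\Big( t,\, x(t),\, \int_{-\tau}^{0} x(t+\theta)\, d\mu(\theta),\, u(t) \Big), \quad t \in [0, T],
    x(\theta) = x_0(\theta), \quad \theta \in [-\tau, 0],

where \mu is a measure encoding how past states are weighted, and the control u is chosen to minimize a cost such as \int_0^T L(t, x(t), u(t))\, dt + g(x(T)); the symbols F, L, g, \tau, \mu, x_0 are illustrative placeholders only.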


Related articles

A New Near Optimal High Gain Controller For The Non-Minimum Phase Affine Nonlinear Systems

This paper introduces a new analytical method for finding a near-optimal high-gain controller for non-minimum-phase affine nonlinear systems. The controller is derived from the closed-form solution of the Hamilton-Jacobi-Bellman (HJB) equation associated with the cheap control problem. The methodology employs an algebraic equation with parametric coefficients for systems with s...
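As a hedged sketch of the cheap control problem invoked above (all symbols generic, not taken from the paper): for affine dynamics \dot{x} = f(x) + g(x)u with running cost q(x) + \varepsilon^2 \|u\|^2, the stationary HJB equation is

    \min_{u} \big[ q(x) + \varepsilon^2 \|u\|^2 + \nabla V(x)^\top ( f(x) + g(x) u ) \big] = 0,

and the minimizer u^*(x) = -\frac{1}{2\varepsilon^2}\, g(x)^\top \nabla V(x) exhibits the high-gain behavior as \varepsilon \to 0.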


Optimal Control of Stochastic Functional Differential Equations with a Bounded Memory

This paper treats a finite time horizon optimal control problem in which the controlled state dynamics is governed by a general system of stochastic functional differential equations with a bounded memory. An infinite-dimensional HJB equation is derived using a Bellman-type dynamic programming principle. It is shown that the value function is the unique viscosity solution of the HJB equation.
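A minimal simulation sketch of a controlled stochastic equation with bounded memory, assuming a scalar state, a single discrete delay, and an open-loop control; the dynamics, names, and parameters below are illustrative, not taken from the paper.

    import numpy as np

    def simulate_sfde(x0_hist, control, T=1.0, dt=1e-3, tau=0.1, sigma=0.2, seed=0):
        """Euler-Maruyama for dX(t) = [u(t) - X(t - tau)] dt + sigma dW(t).

        x0_hist: callable on [-tau, 0] giving the initial path (the memory).
        control: callable t -> u(t); the control enters the drift.
        """
        rng = np.random.default_rng(seed)
        n_steps = int(round(T / dt))
        lag = int(round(tau / dt))               # memory length in steps
        x = np.empty(lag + n_steps + 1)
        # fill the buffer with the initial history on [-tau, 0]
        x[: lag + 1] = [x0_hist(-tau + k * dt) for k in range(lag + 1)]
        for k in range(n_steps):
            i = lag + k                          # index of X(t), t = k*dt
            drift = control(k * dt) - x[i - lag] # uses the delayed state X(t - tau)
            x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        return x[lag:]                           # trajectory on [0, T]

    # example: constant initial history, zero control
    path = simulate_sfde(x0_hist=lambda s: 1.0, control=lambda t: 0.0)

The history buffer x[: lag + 1] plays the role of the initial segment on [-tau, 0]; in the bounded-memory setting it is this segment, not just the current value, that serves as the natural state for dynamic programming, which is why the associated HJB equation is infinite-dimensional.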


On the dynamic programming approach for the 3D Navier-Stokes equations

The dynamic programming approach for the control of a 3D flow governed by the stochastic Navier-Stokes equations for incompressible fluid in a bounded domain is studied. By a compactness argument, existence of solutions for the associated Hamilton-Jacobi-Bellman equation is proved. Finally, existence of an optimal control through the feedback formula and of an optimal state is discussed.
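The feedback formula mentioned above has, in generic notation, the following shape (a sketch under standard verification-theorem assumptions, not the paper's precise statement): given a solution v of the HJB equation, a candidate optimal control is

    u^*(t) = \arg\min_{u \in U} \big[ \langle D v(X(t)), B u \rangle + h(u) \big],

where B is the control operator and h the control cost; an optimal state is then obtained by solving the closed-loop equation with u^* substituted back.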


Hamilton-Jacobi-Bellman Equations and the Optimal Control of Stochastic Systems

In many applications (engineering, management, economics) one is led to control problems for stochastic systems: more precisely, the state of the system is assumed to be described by the solution of stochastic differential equations, and the control enters the coefficients of the equation. Using the dynamic programming principle, R. Bellman [6] explained why, at least heuristically, the optimal cos...
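In standard notation (a generic summary of this setting, not a claim about the reference): the state solves a controlled stochastic differential equation, and dynamic programming leads to a second-order HJB equation for the value function,

    dX_t = b(X_t, u_t)\, dt + \sigma(X_t, u_t)\, dW_t,
    V(t,x) = \inf_{u} \mathbb{E}\Big[ \int_t^T f(X_s, u_s)\, ds + g(X_T) \,\Big|\, X_t = x \Big],
    \partial_t V + \inf_{u} \Big\{ b(x,u) \cdot \nabla_x V + \tfrac{1}{2}\, \mathrm{tr}\big( \sigma \sigma^\top(x,u)\, D_x^2 V \big) + f(x,u) \Big\} = 0, \qquad V(T, \cdot) = g.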


Hamilton-Jacobi-Bellman equations for Quantum Optimal Feedback Control

We exploit the separation of the filtering and control aspects of quantum feedback control to consider the optimal control as a classical stochastic problem on the space of quantum states. We derive the corresponding Hamilton-Jacobi-Bellman equations using the elementary arguments of classical control theory and show that this is equivalent, in the Stratonovich calculus, to a stochastic Hamilton...
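For the change of calculus mentioned above, the standard one-dimensional conversion (a generic fact of stochastic calculus, not specific to the quantum setting) is that the Stratonovich equation

    dX_t = b(X_t)\, dt + \sigma(X_t) \circ dW_t

coincides with the Itô equation

    dX_t = \big( b(X_t) + \tfrac{1}{2}\, \sigma(X_t)\, \sigma'(X_t) \big)\, dt + \sigma(X_t)\, dW_t,

so Bellman-type equations take different but equivalent forms in the two calculi.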



Journal title:

Volume   Issue

Pages  -

Publication date: 2009